Improve error message when run_command_async fails to find a command #1831

Conversation

alleyshairu (Contributor)

Improves error output as described in #1829.

Here's a contrived sample output.

I would like to hear your thoughts on the test case that I have added. The test currently asserts on a string value instead of some custom error type. I generally do not like matching against strings in my tests, but in this instance we are panicking anyway, so is that fine? Or are we better off with a custom error type such as RunCommandError?
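
For context, a minimal sketch of what the error-message helper discussed below might look like. The name command_not_found_error_message appears in the review; the signature, the npm-to-nodejs mapping, and the rest of the body here are assumptions, not the actual code in test-helpers/src/lib.rs.

```rust
// Hypothetical sketch only; the real helper lives in test-helpers/src/lib.rs
// and its signature may differ.
fn command_not_found_error_message(command: &str, args: &[&str]) -> String {
    // Some binaries ship with a differently named package (e.g. `npm` comes
    // with nodejs); otherwise fall back to suggesting the command itself.
    let install_suggestion = match command {
        "npm" => "nodejs",
        _ => command,
    };
    format!(
        "Attempted to run the command `{} {}` but {} does not exist. Have you installed {}?",
        command,
        args.join(" "),
        command,
        install_suggestion
    )
}
```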

codspeed-hq bot commented Nov 21, 2024

CodSpeed Performance Report

Merging #1831 will not alter performance

Comparing alleyshairu:passthrough_nodejs_and_python_error_message_fix (81ac320) with main (8dffbdf)

Summary

✅ 38 untouched benchmarks

@rukai (Member) commented Nov 21, 2024

The implementation

I agree that for errors exposed by a library, a properly defined error enum is very important.
However, for this kind of ad hoc application-level error, I find that we are better off with the flexibility of anyhow.

I think that this error would actually benefit from using anyhow.
Currently we get the error:

called `Result::unwrap()` on an `Err` value: Custom { kind: NotFound, error: "Attempted to run the command `npm install` but npm does not exist. Have you installed nodejs?" }

However, by swapping to anyhow, we can make this a little cleaner:

called `Result::unwrap()` on an `Err` value: Attempted to run the command `npm install` but npm does not exist. Have you installed nodejs?
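
For illustration, a minimal sketch of the two approaches being compared; the function names and call sites are made up, and only the message text comes from this PR.

```rust
use anyhow::{anyhow, Result};

// Before (illustrative): the message is wrapped in a std::io::Error, so
// unwrapping prints the noisy `Custom { kind: NotFound, error: "..." }` wrapper.
fn npm_install_io() -> std::io::Result<()> {
    Err(std::io::Error::new(
        std::io::ErrorKind::NotFound,
        "Attempted to run the command `npm install` but npm does not exist. Have you installed nodejs?",
    ))
}

// After (illustrative): anyhow carries the message directly, so the panic
// from `unwrap()` shows only the message itself.
fn npm_install_anyhow() -> Result<()> {
    Err(anyhow!(
        "Attempted to run the command `npm install` but npm does not exist. Have you installed nodejs?"
    ))
}

fn main() {
    // Swap which line is commented out to compare the two panic messages quoted above.
    // npm_install_io().unwrap();
    npm_install_anyhow().unwrap();
}
```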

The test

Personally I wouldn't have written a test here, since the functionality is improving error messages in the test suite, not in the application itself.
Regardless, I'm quite happy to accept one; it will still bring some value.

I see no issue with comparing strings in the test here, since what we want to test is that the error message is written in a specific way.

In fact, the one issue with the test is that we aren't properly comparing the strings.
Calling the logic under test twice and comparing the results isn't really testing anything at all.
Instead of calling command_not_found_error_message in the test, we should provide the specific string

Attempted to run the command `shotover_non_existent_command arg1 arg2` but shotover_non_existent_command does not exist. Have you installed shotover_non_existent_command?

and compare against that.
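
A sketch of the suggested test shape, assuming a tokio async test and a hypothetical try_run_command_async wrapper that returns the error instead of panicking; only the expected string comes from this PR.

```rust
// Illustrative only: `try_run_command_async` is a hypothetical wrapper that
// surfaces the failure as an error; the real code under test lives in
// test-helpers/src/lib.rs.
#[tokio::test]
async fn run_command_async_missing_command_has_helpful_error() {
    let err = try_run_command_async("shotover_non_existent_command", &["arg1", "arg2"])
        .await
        .unwrap_err();

    // Hardcode the expected message instead of rebuilding it with
    // command_not_found_error_message, so the test pins the exact wording.
    assert_eq!(
        err.to_string(),
        "Attempted to run the command `shotover_non_existent_command arg1 arg2` but \
         shotover_non_existent_command does not exist. Have you installed \
         shotover_non_existent_command?"
    );
}
```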

In general though, your PR is looking good.

Review comments on test-helpers/src/lib.rs resolved (outdated).
@rukai (Member) left a comment

Thanks!

@rukai marked this pull request as ready for review November 22, 2024 00:51
@rukai (Member) left a comment

Reapproving after changes.

@rukai enabled auto-merge (squash) November 22, 2024 06:53
@rukai merged commit 63a8dcd into shotover:main Nov 22, 2024
41 checks passed